We study real-time collaborative robot (cobot) handling, where a cobot maneuvers a workpiece under human commands. This is useful when direct human handling of the workpiece is risky. However, it is hard to make the cobot both easy to command and flexible in its possible operations. In this work, we propose a Real-Time Collaborative Robot Handling (RTCoHand) framework that allows the cobot to be controlled via user-customized dynamic gestures. This is challenging due to variations among users, human motion uncertainty, and noisy human input. We model the task as a probabilistic generative process, termed the Conditional Collaborative Handling Process (CCHP), and learn it from human-human collaboration. We thoroughly evaluate the adaptability and robustness of CCHP and apply our approach to a real-time cobot handling task with a Kinova Gen3 robot arm. We achieve seamless human-robot collaboration with both experienced and novice users. Compared to classical controllers, RTCoHand enables more complex maneuvers and imposes a lower cognitive load on users. It also eliminates the need for trial-and-error, making it appealing for safety-critical tasks.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
Electronic Health Records (EHRs) hold detailed longitudinal information about each patient's health status and general clinical history, a large portion of which is stored within unstructured text. Temporal modelling of this medical history, which considers the sequence of events, can be used to forecast and simulate future events, estimate risk, suggest alternative diagnoses or forecast complications. While most prediction approaches use mainly structured data or a subset of single-domain forecasts and outcomes, we processed the entire free-text portion of EHRs for longitudinal modelling. We present Foresight, a novel GPT3-based pipeline that uses NER+L tools (i.e. MedCAT) to convert document text into structured, coded concepts, and then provides probabilistic forecasts for future medical events such as disorders, medications, symptoms and interventions. Since large portions of EHR data are in text form, such an approach benefits from a granular and detailed view of a patient while introducing only modest additional noise. On tests in two large UK hospitals (King's College Hospital, and South London and Maudsley) and the US MIMIC-III dataset, precision@10 values of 0.80, 0.81 and 0.91 were achieved for forecasting the next biomedical concept. Foresight was also validated on 34 synthetic patient timelines by 5 clinicians and achieved a relevancy of 97% for the top forecasted candidate disorder. Foresight can be easily trained and deployed locally, as it only requires free-text data (as a minimum). As a generative model, it can simulate follow-on disorders, medications and interventions for as many steps as required. Foresight is a general-purpose model for biomedical concept modelling that can be used for real-world risk estimation, virtual trials and clinical research to study the progression of diseases, simulate interventions and counterfactuals, and for educational purposes.
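The precision@10 figures above measure how often the true next clinical event appears among a model's top ten forecasts. A minimal sketch of that metric (the concept codes and forecast lists below are hypothetical, not from the paper's evaluation code):

```python
def precision_at_k(forecasts, actual_next, k=10):
    """Fraction of timelines whose true next concept appears in the
    model's top-k forecast list. Toy illustration of the metric
    reported in the abstract."""
    hits = sum(1 for preds, truth in zip(forecasts, actual_next)
               if truth in preds[:k])
    return hits / len(forecasts)

# Hypothetical ranked coded-concept forecasts for three patient timelines.
forecasts = [
    ["C0011849", "C0020538", "C0027051"],  # e.g. diabetes, hypertension, MI
    ["C0020538", "C0011849"],
    ["C0027051"],
]
actual_next = ["C0020538", "C0004096", "C0027051"]
print(precision_at_k(forecasts, actual_next, k=10))  # 2 of 3 timelines hit
```

Here the second timeline's true next concept is absent from its forecast list, so the score is 2/3.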
The deployment of machine learning models in safety-critical applications comes with the expectation that such models will perform well over a range of contexts (e.g., a vision model for classifying street signs should work in rural, city, and highway settings under varying lighting/weather conditions). However, these one-size-fits-all models are typically optimized for average case performance, encouraging them to achieve high performance in nominal conditions but exposing them to unexpected behavior in challenging or rare contexts. To address this concern, we develop a new method for training context-dependent models. We extend Bridge-Mode Connectivity (BMC) (Garipov et al., 2018) to train an infinite ensemble of models over a continuous measure of context such that we can sample model parameters specifically tuned to the corresponding evaluation context. We explore the definition of context in image classification tasks through multiple lenses including changes in the risk profile, long-tail image statistics/appearance, and context-dependent distribution shift. We develop novel extensions of the BMC optimization for each of these cases and our experiments demonstrate that model performance can be successfully tuned to context in each scenario.
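Bridge-Mode Connectivity (Garipov et al., 2018) connects two trained parameter sets with a low-loss parametric curve, e.g. a quadratic Bezier curve with a learned bend point; sampling a point on the curve yields a full set of model parameters. A minimal sketch of how a continuous context measure could index such a curve (the weights and context names here are invented for illustration):

```python
def bridge_params(theta_a, theta_b, bend, t):
    """Quadratic Bezier curve between two endpoint parameter vectors,
    as in Garipov et al. (2018). Here t in [0, 1] plays the role of
    a continuous context measure: sampling t yields parameters tuned
    to that context. Toy element-wise version over flat weight lists."""
    assert 0.0 <= t <= 1.0
    return [(1 - t) ** 2 * a + 2 * t * (1 - t) * m + t ** 2 * b
            for a, m, b in zip(theta_a, bend, theta_b)]

# Hypothetical endpoints trained for two extreme contexts, plus a
# learned bend (control) point found by the bridge optimization.
theta_day   = [1.0, -2.0]   # weights for context t = 0
theta_night = [3.0,  0.0]   # weights for context t = 1
bend        = [2.0,  2.0]

print(bridge_params(theta_day, theta_night, bend, 0.0))  # recovers theta_day
print(bridge_params(theta_day, theta_night, bend, 0.5))  # intermediate context
```

The curve recovers each endpoint exactly at t=0 and t=1, while intermediate t values blend through the learned bend point rather than interpolating linearly.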
We examine the inducement of rare but severe errors in English-Chinese and Chinese-English in-domain neural machine translation by minimal deletions from the source text with character-based models. By deleting a single character, we find that we can induce severe errors in the translation. We categorize these errors and compare the results of deleting single characters versus single words. We also examine the effect of training data size on the number and types of pathological cases induced by these minimal perturbations, finding significant variation.
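The minimal perturbation described above, deleting exactly one character from the source sentence, is simple to enumerate exhaustively. A sketch (the example sentence is invented; each variant would then be translated and compared against the original sentence's translation):

```python
def single_char_deletions(src):
    """Generate every variant of a source sentence with exactly one
    character deleted -- the minimal perturbation probed in the
    abstract. Each variant is fed to the NMT model and its output
    compared against the translation of the unperturbed sentence."""
    return [src[:i] + src[i + 1:] for i in range(len(src))]

variants = single_char_deletions("猫坐在垫子上")
print(len(variants))   # one variant per source character: 6
print(variants[0])     # first character dropped
```

Word-level deletion, the other condition compared in the abstract, would iterate over a tokenized sentence instead of raw characters.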
Sampling is ubiquitous in machine learning methodologies. Due to the growth of large datasets and model complexity, we want to learn and adapt the sampling process while training a representation. To achieve this goal, a variety of sampling techniques have been proposed. However, most of them either use a fixed sampling scheme or adjust the sampling scheme based on simple heuristics. They cannot choose the best samples for model training at different stages. Inspired by "Thinking, Fast and Slow" (System 1 and System 2) in cognitive science, we propose a reward-guided sampling strategy called Adaptive Sample with Reward (ASR) to tackle this challenge. To the best of our knowledge, this is the first work utilizing reinforcement learning (RL) to address the sampling problem in representation learning. Our approach optimally adjusts the sampling process to achieve optimal performance. We explore geographical relationships among samples via distance-based sampling to maximize overall cumulative reward. We apply ASR to the long-standing sampling problem in similarity-based loss functions. Empirical results in information retrieval and clustering demonstrate ASR's superb performance across different datasets. We also discuss an engaging phenomenon which we call the "ASR gravity" in our experiments.
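ASR's actual formulation is a full RL problem with distance-based state features; as a much simpler illustration of the core idea of reward-guided sample selection, here is a toy epsilon-greedy bandit over candidate samples (the reward function and candidate indices are invented, not from the paper):

```python
import random

def reward_guided_sampling(candidates, reward_fn, steps=200, eps=0.1, seed=0):
    """Toy epsilon-greedy sketch of reward-guided sample selection:
    each candidate keeps a running mean reward, and we mostly pick
    the current best while occasionally exploring. ASR proper uses
    a reinforcement-learning policy, not this bandit."""
    rng = random.Random(seed)
    est = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    for _ in range(steps):
        if rng.random() < eps:
            c = rng.choice(candidates)           # explore
        else:
            c = max(candidates, key=lambda x: est[x])  # exploit
        r = reward_fn(c)
        counts[c] += 1
        est[c] += (r - est[c]) / counts[c]       # incremental mean
    return max(candidates, key=lambda x: est[x])

# Hypothetical reward: samples near index 3 reduce the loss the most.
best = reward_guided_sampling(list(range(8)), lambda c: -abs(c - 3))
print(best)  # settles on candidate 3
```

The point of the sketch is only the feedback loop: observed reward updates the sampler, which in turn changes which samples are drawn at later training stages.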
Scientists increasingly rely on Python tools to perform scalable distributed-memory array operations using rich, NumPy-like expressions. However, many of these tools rely on dynamic schedulers optimized for abstract task graphs, which often encounter memory- and network-bandwidth-related bottlenecks due to sub-optimal data and operator placement decisions. Tools built on the Message Passing Interface (MPI), such as ScaLAPACK and SLATE, have better scaling properties, but these solutions require specialized knowledge to use. In this work, we present NumS, an array programming library which optimizes NumPy-like expressions on task-based distributed systems. This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS). LSHS is a local search method which optimizes operator placement by minimizing the maximum memory and network load on any given node within a distributed system. Coupled with a heuristic for load-balanced data layouts, our approach is able to attain communication lower bounds on some common numerical operations, and our empirical study shows that LSHS enhances performance on Ray by decreasing network load by a factor of 2x, requiring 4x less memory, and reducing execution time by 10x on the logistic regression problem. On terabyte-scale data, NumS achieves competitive performance on DGEMM, up to a 20x speedup over Dask ML and Spark's MLlib on a key operation for QR decomposition, as well as a 2x speedup on logistic regression.
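LSHS itself is a local search over placements driven by simulated memory and network loads; the flavor of load-minimizing operator placement can be conveyed by a much simpler greedy longest-processing-time heuristic (the operator costs and node count below are invented for illustration, and this is not the NumS scheduler):

```python
def greedy_placement(op_costs, n_nodes):
    """Toy load-minimizing operator placement: assign each operator,
    largest first (LPT heuristic), to the node with the smallest
    current load, keeping the maximum per-node load low. LSHS proper
    is a local search that also simulates memory and network
    transfer costs rather than a single scalar cost per operator."""
    loads = [0.0] * n_nodes
    placement = []
    for cost in sorted(op_costs, reverse=True):
        node = min(range(n_nodes), key=lambda n: loads[n])
        loads[node] += cost
        placement.append((cost, node))
    return placement, max(loads)

# Hypothetical per-operator costs placed across two nodes.
placement, makespan = greedy_placement([4, 3, 3, 2, 2, 2], n_nodes=2)
print(makespan)  # perfectly balanced here: 8 vs 8
```

Minimizing the maximum per-node load is exactly the objective the abstract attributes to LSHS, just without the simulated-load model and hierarchical search.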
Training large neural network (NN) models requires extensive memory resources, and Activation Compressed Training (ACT) is a promising approach to reduce the training memory footprint. This paper presents GACT, an ACT framework designed to support a broad range of machine learning tasks on generic NN architectures with limited domain knowledge. By analyzing a linearized version of ACT's approximate gradient, we prove the convergence of GACT without prior knowledge of operator type or model architecture. To keep training stable, we propose an algorithm that decides the compression ratio for each tensor by estimating its impact on the gradient at run time. We implement GACT as a PyTorch library that readily applies to any NN architecture. GACT reduces the activation memory of convolutional NNs, transformers, and graph NNs by up to 8.1x, enabling training with a 4.2x to 24.7x larger batch size, with negligible accuracy loss.
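The core trick in ACT-style methods is storing activations in a lossy low-bit form during the forward pass and reconstructing them for the backward pass. A minimal uniform-quantization sketch of that compress/decompress step (toy float lists, not GACT's actual per-tensor, impact-driven bit allocation):

```python
def compress(x, bits):
    """Uniform quantization of an activation list to a given bit
    width -- a toy stand-in for ACT's lossy activation storage."""
    lo, hi = min(x), max(x)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in x]
    return q, lo, scale

def decompress(q, lo, scale):
    """Reconstruct approximate activations for the backward pass."""
    return [lo + qi * scale for qi in q]

acts = [0.0, 0.5, 1.0, 2.0]
q, lo, scale = compress(acts, bits=2)        # 4 levels instead of float32
recovered = decompress(q, lo, scale)
err = max(abs(a - r) for a, r in zip(acts, recovered))
print(q, err)  # small integers to store; bounded reconstruction error
```

GACT's contribution, per the abstract, is choosing `bits` per tensor at run time by estimating each tensor's impact on the gradient, so that aggressive compression is applied only where the gradient tolerates it.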
The ability to reason with multiple hierarchical structures is an attractive and desirable property of sequential inductive biases for natural language processing. Do state-of-the-art Transformer and LSTM architectures implicitly encode such biases? To answer this, we propose a diagnostic dataset for systematically evaluating hierarchical reasoning in state-of-the-art neural sequence models. While there have been prior evaluation frameworks such as ListOps or logical inference, our work presents a novel and more natural setting in which models learn to reason with multiple explicit hierarchical structures instead of only one, i.e., requiring long-term sequence memorization and relational reasoning while reasoning with hierarchical structure. Consequently, backed by a set of rigorous experiments, we show that (1) Transformer and LSTM models surprisingly fail at systematic generalization, and (2) with increased references between hierarchies, Transformers perform no better than random.
Inducing latent tree structures from sequential data is an emerging trend in today's NLP research landscape, largely popularized by recent methods such as Gumbel LSTM and Ordered Neurons (ON-LSTM). This paper proposes FastTrees, a new general-purpose neural module for fast sequence encoding. Unlike most previous works, which consider recurrence necessary for tree induction, our work explores the notion of parallel tree induction, i.e., imbuing our model with hierarchical inductive biases in a parallel, non-autoregressive fashion. To this end, our proposed FastTrees achieves competitive or superior performance to ON-LSTM on four well-established sequence modeling tasks, namely language modeling, logical inference, sentiment analysis and natural language inference. Moreover, we show that the FastTrees module can be applied to enhance Transformer models, achieving performance gains on three sequence transduction tasks (machine translation, subject-verb agreement and mathematical language understanding), paving the way for modular tree induction modules. Overall, we outperform existing state-of-the-art models by +4% on the logical inference task and +8% on mathematical language understanding.